Results 1 - 2 of 2
1.
Int J Psychol; 58(4): 380-387, 2023 Aug.
Article in English | MEDLINE | ID: covidwho-2298849

ABSTRACT

The current study investigated the assessment of depression, anxiety, and stress under normal and COVID-19 pandemic conditions. Generalisability theory (G-theory) was applied to examine stable and dynamic aspects of psychological distress and the overall reliability of the Depression, Anxiety and Stress Scales (DASS-21), using data from two independent samples collected on three occasions at 2- to 4-week intervals. The US data (n = 115) were collected before the COVID-19 pandemic, and the New Zealand (NZ) data (n = 114) were obtained during the pandemic. The total DASS-21 demonstrated excellent reliability in measuring enduring symptoms of psychological distress (G = .94-.96) across both samples. While all DASS-21 subscales demonstrated good reliability in the pre-pandemic US sample, the subscales' reliability was below an acceptable level for the NZ sample. Findings from this study indicate that overall psychological distress is enduring and can be reliably measured by the DASS-21 across different conditions and populations, while shifts in depression, anxiety and stress levels are likely during emergencies and periods of uncertainty, as seen in the COVID-19 pandemic.
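The abstract does not specify the exact G-study design, so the following is only a minimal sketch of how a relative G coefficient could be estimated for a one-facet persons x occasions design, assuming a matrix of total DASS-21 scores with one row per participant and one column per occasion; the data below are synthetic and purely illustrative.

```python
import numpy as np

def g_coefficient(scores: np.ndarray) -> float:
    """Relative G coefficient for a one-facet persons x occasions design.

    `scores` is an (n_persons, n_occasions) matrix of total DASS-21 scores.
    """
    n_p, n_o = scores.shape
    grand = scores.mean()

    # Sums of squares for persons, occasions, and residual (p x o interaction + error)
    ss_p = n_o * ((scores.mean(axis=1) - grand) ** 2).sum()
    ss_o = n_p * ((scores.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_res = ss_total - ss_p - ss_o

    ms_p = ss_p / (n_p - 1)
    ms_res = ss_res / ((n_p - 1) * (n_o - 1))

    # Variance components from expected mean squares
    var_p = max((ms_p - ms_res) / n_o, 0.0)   # stable person (trait) variance
    var_res = ms_res                          # occasion-specific fluctuation + error

    # Proportion of observed-score variance attributable to persons
    return var_p / (var_p + var_res / n_o)

# Synthetic example: 115 "participants" measured on 3 occasions
rng = np.random.default_rng(0)
true_distress = rng.normal(30, 10, size=(115, 1))   # stable person effect
occasion_noise = rng.normal(0, 4, size=(115, 3))    # occasion-to-occasion change
print(round(g_coefficient(true_distress + occasion_noise), 2))
```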


Subject(s)
COVID-19 , Depression , Humans , Depression/diagnosis , Depression/epidemiology , Depression/psychology , COVID-19/epidemiology , Pandemics , Reproducibility of Results , Stress, Psychological/psychology , Psychometrics , Anxiety/diagnosis , Anxiety/epidemiology , Anxiety/psychology
2.
Information Processing and Management; 60(1), 2023.
Article in English | Scopus | ID: covidwho-2242256

ABSTRACT

Research on automated social media rumour verification, the task of identifying the veracity of questionable information circulating on social media, has yielded neural models achieving high performance, with accuracy scores that often exceed 90%. However, none of these studies focus on the real-world generalisability of the proposed approaches, that is, whether the models perform well on datasets other than those on which they were initially trained and tested. In this work we aim to fill this gap by assessing the generalisability of top-performing neural rumour verification models covering a range of different architectures from the perspectives of both topic and temporal robustness. For a more complete evaluation of generalisability, we collect and release COVID-RV, a novel dataset of Twitter conversations revolving around COVID-19 rumours. Unlike other existing COVID-19 datasets, our COVID-RV contains conversations around rumours that follow the format of prominent rumour verification benchmarks, while being different from them in terms of topic and time scale, thus allowing better assessment of the temporal robustness of the models. We evaluate model performance on COVID-RV and three popular rumour verification datasets to understand the limitations and advantages of different model architectures, training datasets and evaluation scenarios. We find a dramatic drop in performance when testing models on a different dataset from that used for training. Further, we evaluate the ability of models to generalise in a few-shot learning setup, as well as when word embeddings are updated with the vocabulary of a new, unseen rumour. Drawing upon our experiments, we discuss challenges and make recommendations for future research directions in addressing this important problem. © 2022 The Author(s)
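The paper evaluates neural rumour verification models; the sketch below only illustrates the cross-dataset evaluation protocol the abstract describes (train on one benchmark, compare in-domain versus out-of-domain accuracy), with a simple TF-IDF + logistic regression classifier standing in for a neural model. Each dataset is assumed to be a list of (text, veracity_label) pairs, and `load_dataset` is a hypothetical helper, not part of the released COVID-RV resources.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

def evaluate_generalisation(train_data, in_domain_test, out_of_domain_test):
    """Train on one rumour dataset and compare in- vs. out-of-domain accuracy."""
    texts, labels = zip(*train_data)
    model = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression(max_iter=1000))
    model.fit(texts, labels)

    scores = {}
    for name, data in [("in_domain", in_domain_test), ("out_of_domain", out_of_domain_test)]:
        X, y = zip(*data)
        scores[name] = accuracy_score(y, model.predict(X))
    return scores

# Hypothetical usage: train/test on an existing benchmark, then test on COVID-RV
# benchmark_train, benchmark_test = load_dataset("benchmark")   # placeholder names
# covid_rv_test = load_dataset("covid-rv")
# print(evaluate_generalisation(benchmark_train, benchmark_test, covid_rv_test))
```

The drop between the two accuracy figures is the quantity of interest: a large gap indicates that the model has overfit to the topics and time period of its training data rather than learning transferable cues.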
